    Conversational AI Agents: Investigating AI-Specific Characteristics that Induce Anthropomorphism and Trust in Human-AI Interaction

    Investment in AI agents has steadily increased over the past few years, yet adoption of these agents has been uneven. Industry reports show that the majority of people do not trust AI agents with important tasks. While existing IS theories explain users’ trust in IT artifacts, several recent studies have raised doubts about the applicability of these theories in the context of AI agents. At first glance, an AI agent might seem like any other technological artifact. A more in-depth assessment, however, reveals some fundamental characteristics that make AI agents different from previous IT artifacts. The aim of this dissertation, therefore, is to identify the AI-specific characteristics and behaviors that contribute to or hinder trust and distrust, thereby shaping users’ behavior in human-AI interaction. Using a custom-developed conversational AI agent, this dissertation extends the human-AI literature by introducing and empirically testing six new constructs, namely, AI indeterminacy, task fulfillment indeterminacy, verbal indeterminacy, AI inheritability, AI trainability, and AI freewill.

    Update Assimilation in App Markets: Is There Such a Thing as Too Many Updates?

    Extant literature suggests that faster app evolution (in terms of changes in quality and functionality) leads to increased app success (in terms of survival and demand). This evolution-success relationship, however, does not account for users’ limited capacity…

    Conversational Assistants: Investigating Privacy Concerns, Trust, and Self-Disclosure

    By the end of 2017, more than 33 million voice-based devices will be in circulation, many of which will include conversational assistants such as Amazon’s Alexa and Apple’s Siri. These devices require a significant amount of personal information from users to learn their preferences and provide them with personalized responses. This creates an interesting and important tension: the more information users disclose, the greater the value they receive from these devices; however, due to concerns about the privacy of personal information, users tend to disclose less information. In this study, we examine the role of reciprocal self-disclosure and trust within the novel and emerging context of conversational assistants. Specifically, we investigate the effect of conversational assistants’ self-disclosure on the relationship between users’ privacy concerns and their self-disclosure. Further, we explore the mechanism through which self-disclosure by conversational assistants influences this relationship, namely, the role of cognitive trust and emotional trust.